6 research outputs found

    Human skill capturing and modelling using wearable devices

    Industrial robots are delivering more and more manipulation services in manufacturing. However, when the task is complex, it is difficult to programme a robot to fulfil all the requirements, because even a relatively simple task such as a peg-in-hole insertion contains many uncertainties, e.g. clearance, initial grasping position and insertion path. Humans, on the other hand, can deal with these variations using their vision and haptic feedback. Although humans can adapt to uncertainties easily, most of the time the skill-based performances that relate to their tacit knowledge cannot be easily articulated. Even though the automation solution may not fully imitate human motion, since some of it is not necessary, it would be useful if the skill-based performance of a human could first be interpreted and modelled, allowing it to be transferred to the robot. This thesis aims to reduce robot programming efforts significantly by developing a methodology to capture, model and transfer manual manufacturing skills from a human demonstrator to the robot. Recently, Learning from Demonstration (LfD) has gained interest as a framework for transferring skills from a human teacher to a robot, using probabilistic encoding approaches to model observations and state-transition uncertainties. In close or actual contact manipulation tasks, it is difficult to reliably record the state-action examples without interfering with the human senses and activities. Therefore, wearable sensors are investigated as a promising means of recording state-action examples without restricting the human experts during the skilled execution of their tasks. Firstly, to track human motions accurately and reliably in a defined 3-dimensional workspace, a hybrid system of Vicon and IMUs is proposed to compensate for the known limitations of each individual system. The data fusion method was able to overcome the occlusion and frame-flipping problems in the two-camera Vicon setup and the drift problem associated with the IMUs. The results indicated that the occlusion and frame-flipping problems associated with Vicon can be mitigated by using the IMU measurements. Furthermore, the proposed method improves the Mean Square Error (MSE) tracking accuracy by 0.8° to 6.4° compared with the IMU-only method. Secondly, to record haptic feedback from a teacher without physically obstructing their interactions with the workpiece, wearable surface electromyography (sEMG) armbands were used as an indirect method of indicating contact feedback during manual manipulations. A muscle-force model using a Time Delayed Neural Network (TDNN) was built to map the sEMG signals to the known contact force. The results indicated that the model was capable of estimating the force from the sEMG armbands in the applications of interest, namely peg-in-hole and beater winding tasks, with MSEs of 2.75 N and 0.18 N respectively. Finally, given the force estimation and the motion trajectories, a Hidden Markov Model (HMM) based approach was utilised as a state recognition method to encode and generalise the spatial and temporal information of the skilled executions. This method allows a more representative control policy to be derived. A modified Gaussian Mixture Regression (GMR) method was then applied to enable motion reproduction using the learned state-action policy.
    To simplify the validation procedure, instead of using the robot, additional demonstrations from the teacher were used to verify the reproduction performance of the policy, by assuming that the human teacher and the robot learner are physically identical systems. The results confirmed the generalisation capability of the HMM model across a number of demonstrations from different subjects, and the motions reproduced by GMR were acceptable in these additional tests. The proposed methodology provides a framework for producing a state-action model from skilled demonstrations that can be translated into robot kinematics and joint states for the robot to execute. The implication for industry is reduced effort and time in programming robots for applications where skilled human performance is required to cope robustly with various uncertainties during task execution.
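    As a rough illustration of the encoding and reproduction steps described in this abstract, the sketch below fits a Gaussian-emission HMM to stacked demonstrations and then reproduces a trajectory with a simple time-conditioned Gaussian Mixture Regression over the learned emission Gaussians. The file names, feature layout, number of states and the choice of hmmlearn/numpy are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Hypothetical demonstrations: each row is [t, x, y, z, f] (time, position, estimated force).
demos = [np.loadtxt(f"demo_{i}.csv", delimiter=",") for i in range(5)]  # illustrative file names
data = np.vstack(demos)
lengths = [len(d) for d in demos]

# Encode the demonstrations with a Gaussian-emission HMM (full covariances).
hmm = GaussianHMM(n_components=6, covariance_type="full", n_iter=200)
hmm.fit(data, lengths)

def gmr(query_t, in_idx=[0], out_idx=[1, 2, 3, 4]):
    """Gaussian Mixture Regression over the HMM emission Gaussians:
    condition each state's joint Gaussian on the time input and average
    the conditional means, weighted by the input likelihood of each state."""
    mus, sigmas = hmm.means_, hmm.covars_
    weights, cond_means = [], []
    for k in range(hmm.n_components):
        mu_i, mu_o = mus[k][in_idx], mus[k][out_idx]
        s_ii = sigmas[k][np.ix_(in_idx, in_idx)]
        s_oi = sigmas[k][np.ix_(out_idx, in_idx)]
        diff = query_t - mu_i
        # Likelihood of the query under this state's input marginal.
        w = np.exp(-0.5 * diff @ np.linalg.solve(s_ii, diff)) / np.sqrt(np.linalg.det(2 * np.pi * s_ii))
        weights.append(w)
        cond_means.append(mu_o + s_oi @ np.linalg.solve(s_ii, diff))
    weights = np.array(weights) / (np.sum(weights) + 1e-12)
    return np.sum(weights[:, None] * np.array(cond_means), axis=0)

# Reproduce a motion by querying the regression at each normalised time step.
trajectory = np.array([gmr(np.array([t])) for t in np.linspace(0.0, 1.0, 200)])
```

    The thesis describes a modified GMR; this version only conditions on time and ignores the HMM transition structure, so it should be read as a minimal baseline rather than the actual method.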

    Toward adaptive and intelligent electroadhesives for robotic material handling

    An autonomous, adaptive and intelligent electroadhesive material handling system is presented in this paper. The system has been proposed and defined based on the identification of a system need through a comprehensive literature review and laboratory-based experimental tests. The proof of the proposed concept has been implemented through a low-cost and novel electroadhesive pad design and manufacturing process, and a mechatronic, reconfigurable platform employing force, humidity and capacitive sensors. This provides a solution for an autonomous electroadhesive material handling system that is adaptive to the environment and the substrate material. The results show that the minimum voltage can be applied to robustly grasp different materials under different environmental conditions. The proposed system is particularly useful for pick-and-place applications where various types of materials and changing environments exist, such as robotic material handling in the textile and waste recycling industries.
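    A minimal sketch of the kind of "minimum voltage" grasp loop the abstract alludes to is shown below: the pad voltage is ramped up until the measured holding force exceeds the requirement, with a crude humidity-based margin. The pad driver, its readings, the derating factor and all numerical values are hypothetical stand-ins, not parameters from the paper.

```python
import time

class SimulatedPad:
    """Stand-in for the pad driver; a real system would talk to the high-voltage
    supply and the force/humidity/capacitive sensors described in the abstract."""
    def __init__(self):
        self.v = 0.0
    def set_voltage(self, volts):
        self.v = volts
    def read_holding_force(self):
        return 0.004 * self.v          # toy model: holding force grows with voltage
    def read_humidity(self):
        return 45.0                     # % relative humidity

def find_minimum_grasp_voltage(pad, required_force, v_start=500.0, v_max=5000.0, v_step=100.0):
    """Ramp the pad voltage upward until the measured holding force exceeds the
    requirement (scaled up slightly in humid conditions), then return that voltage."""
    margin = 1.0 + 0.2 * (pad.read_humidity() / 100.0)   # assumed humidity derating
    v = v_start
    while v <= v_max:
        pad.set_voltage(v)
        time.sleep(0.05)                                  # allow the charge to build up
        if pad.read_holding_force() >= required_force * margin:
            return v
        v += v_step
    raise RuntimeError("Required holding force not reached within the voltage limit")

v_min = find_minimum_grasp_voltage(SimulatedPad(), required_force=5.0)
```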

    Human skill capture: A hidden Markov model of force and torque data in peg-in-a-hole assembly process

    A new model has been constructed to generalise the force and torque information recorded during a manual peg-in-a-hole (PiH) assembly process. The paper uses Hidden Markov Model analysis to interpret the state topology (transition probabilities) and observations (force/torque signals) in the manipulation task. The task can be recognised as several discrete states that reflect the intrinsic nature of the process. Since the whole manipulation process happens quickly, even the operators themselves cannot articulate the exact states; these are tacit skills which are difficult to extract using human-factors methodologies. In order to programme a robot to complete tasks at skill level, numerical representations of the sub-goals are necessary. Therefore, the recognised ‘hidden’ states become valuable when a detailed explanation of the task is needed and when a robot controller needs to change its behaviour in different states. A Gaussian Mixture Model (GMM) is used as the initial guess of the observation distributions. A Hidden Markov Model is then used to encode the state (sub-goal) topology and the observation densities associated with those sub-goals. The Viterbi algorithm is applied for model-based analysis of the force and torque signals and their classification into sub-goals, and the Baum-Welch algorithm is used for training, to estimate the most likely model parameters. In addition to generic state recognition, the proposed method also enhances our understanding of skill-based performance in manual tasks.
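    The GMM-initialised HMM pipeline described above can be sketched roughly as follows, assuming the force/torque recordings are available as numpy arrays and using hmmlearn and scikit-learn as stand-in libraries; the file names, feature layout and the choice of five states are assumptions for illustration only.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.mixture import GaussianMixture

# Hypothetical PiH demonstrations: each row is [Fx, Fy, Fz, Tx, Ty, Tz].
demos = [np.loadtxt(f"pih_demo_{i}.csv", delimiter=",") for i in range(10)]
data, lengths = np.vstack(demos), [len(d) for d in demos]

n_states = 5  # e.g. approach, contact, alignment, insertion, completion (assumed)

# A GMM supplies the initial guess of the per-state observation densities.
gmm = GaussianMixture(n_components=n_states, covariance_type="full").fit(data)

# Baum-Welch (EM) then refines the transition topology and emission parameters.
hmm = GaussianHMM(n_components=n_states, covariance_type="full",
                  init_params="st", n_iter=100)
hmm.means_, hmm.covars_ = gmm.means_, gmm.covariances_
hmm.fit(data, lengths)

# Viterbi decoding labels each sample of a trial with its most likely sub-goal.
state_sequence = hmm.predict(demos[0])
```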

    Recommending Location Privacy Preferences in Ubiquitous Computing

    Poster abstract presented at the 7th ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec) (http://www.sigsac.org/wisec/WiSec2014/).

    Improving human robot collaboration through force/torque based learning for object manipulation

    Human-Robot Collaboration (HRC) is a term used to describe tasks in which robots and humans work together to achieve a goal. Unlike traditional industrial robots, collaborative robots need to be adaptive, able to alter their approach to better suit the situation and the needs of the human partner. As traditional programming techniques can struggle with the complexity required, an emerging approach is to learn a skill by observing human demonstration and imitating the motions, commonly known as Learning from Demonstration (LfD). In this work, we present an LfD methodology that combines an ensemble machine learning algorithm, i.e. Random Forest (RF), with stochastic regression, using haptic information captured from human demonstration. The capabilities of the proposed method are evaluated using two collaborative tasks: co-manipulation of an object (where the human provides the guidance but the robot handles the object's weight) and collaborative assembly of simple interlocking parts. The proposed method is shown to be capable of imitation learning, interpreting human actions and producing equivalent robot motion across a diverse range of initial and final conditions. After verifying that ensemble machine learning can be utilised for real robotics problems, we propose a further extension utilising a Weighted Random Forest (WRF) that attaches a weight to each tree based on its performance. It is then shown that the WRF approach outperforms RF in HRC tasks.
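    One way to realise the per-tree weighting the WRF extension describes is sketched below: a standard Random Forest regressor is trained on demonstration data, each tree is scored on a held-out split, and predictions are combined as a weighted average instead of the plain forest mean. The data files, the inverse-MSE weighting rule and the scikit-learn implementation are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical demonstration data: haptic/state features X mapped to motion targets y.
X, y = np.load("hrc_features.npy"), np.load("hrc_targets.npy")   # illustrative file names
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Weight each tree by its validation performance (here, inverse MSE).
errors = np.array([np.mean((tree.predict(X_val) - y_val) ** 2) for tree in rf.estimators_])
weights = 1.0 / (errors + 1e-9)
weights /= weights.sum()

def wrf_predict(X_query):
    """Weighted average of the per-tree predictions instead of the plain RF mean."""
    preds = np.stack([tree.predict(X_query) for tree in rf.estimators_])
    return np.tensordot(weights, preds, axes=1)

y_hat = wrf_predict(X_val)
```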

    Investigation on Titanium Silicalite‑1 Zeolite Synthesis Employing ATPAOH as an Organic Structure Directing Agent

    Tetrapropylammonium hydroxide (TPAOH) as an organic structure directing agent (OSDA) is of great importance for the preparation of titanium silicalite-1 (TS-1) zeolite. In this paper, we employed a new OSDA, allyltripropylammonium hydroxide (ATPAOH), in the synthesis process and successfully synthesized ATS-1 zeolite (MFI type). Compared with the traditional OSDA TPAOH, one of the propyl groups in ATPAOH is substituted by an allyl group, which endows ATPAOH with unique properties. On the one hand, ATPAOH accelerates the crystallization rate of titanium silicalite zeolite remarkably, owing to the strong interaction between Ti species and ATPAOH during the crystallization period. On the other hand, ATPAOH is beneficial for the formation of isolated 6-coordinated Ti species, thus leading to the generation of a lower amount of anatase. Owing to its abundant active Ti species, ATS-1 prepared with ATPAOH as OSDA exhibits a much better catalytic performance for the cyclohexanone ammoximation reaction than TS-1 prepared with TPAOH as OSDA.